137 research outputs found

    Bladder segmentation in MRI images using active region growing model.

    Prostate segmentation in MRI may be difficult at the interface with the bladder, where contrast is poor. Coupled models that simultaneously segment both organs under non-overlapping constraints offer a good solution. Since a pre-segmentation of the structures of interest is required, we propose in this paper a fast deformable model to segment the bladder. The combination of inflation and internal forces, locally adapted according to the gray levels, allows the mesh to be deformed toward the boundaries while overcoming the leakage issues that can occur at weak edges. The algorithm, evaluated on 33 MRI volumes from 5 different devices, has shown good performance, providing a smooth and accurate surface.
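    The force combination described above can be sketched as a single mesh-update step. This is a minimal illustration, not the paper's implementation: the gray-level gating rule, the parameter names, and the Laplacian internal force are all assumptions.

    ```python
    import numpy as np

    def deform_step(vertices, normals, neighbors, intensity_at,
                    mean_in, sigma_in, alpha=0.5, beta=0.3):
        """One iteration of a balloon-style deformable mesh update (sketch).

        Hypothetical rule: the inflation force pushes each vertex along its
        normal only while the local gray level still matches the interior
        statistics (within 2 standard deviations of mean_in); an internal
        force pulls each vertex toward the centroid of its neighbors, which
        keeps the surface smooth and limits leakage through weak edges.
        """
        new_vertices = vertices.copy()
        for i, v in enumerate(vertices):
            # Gray-level gate: inflate only while inside organ-like intensities.
            g = intensity_at(v)
            inflate = alpha if abs(g - mean_in) < 2.0 * sigma_in else 0.0
            # Internal (smoothing) force: move toward the neighbor centroid.
            centroid = vertices[neighbors[i]].mean(axis=0)
            new_vertices[i] = v + inflate * normals[i] + beta * (centroid - v)
        return new_vertices
    ```

    Iterating this step inflates the mesh toward the organ boundary and stalls where the intensity gate switches off, which is the leakage-control idea the abstract describes.
    
    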

    Adaptation and evaluation of the multiple organs OSD for T2 MRI prostate segmentation

    This paper deals with the adaptation, tuning, and evaluation of the multiple-organ Optimal Surface Detection (OSD) algorithm for T2 MRI prostate segmentation. The algorithm is initialized with first surface approximations of the prostate (obtained after a model adjustment), the bladder (obtained automatically), and the rectum (interactive geometrical model). These three organs are then segmented together in a multiple-organ OSD scheme that balances the gray-level characteristics of the three organs against topological and anatomical constraints. The method has been evaluated on the MICCAI Grand Challenge: Prostate MR Image Segmentation (PROMISE) 2012 training dataset.
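    The core idea behind Optimal Surface Detection, finding a boundary that minimizes a gray-level cost subject to hard smoothness constraints, can be illustrated with a toy one-dimensional dynamic program. This is an analogy only: the actual OSD algorithm solves the problem on a graph (typically via minimum cut), and the cost values and `delta` bound here are assumptions.

    ```python
    def optimal_surface_1d(cost, delta):
        """Toy dynamic-programming analogue of Optimal Surface Detection.

        Picks one boundary position per column, minimizing the summed
        gray-level cost while adjacent columns differ by at most `delta`
        rows. That bounded-displacement rule is the hard smoothness
        constraint the OSD graph construction encodes.
        """
        n_cols, n_rows = len(cost), len(cost[0])
        dp = list(cost[0])  # best cost ending at each row of column 0
        for c in range(1, n_cols):
            new = []
            for r in range(n_rows):
                lo, hi = max(0, r - delta), min(n_rows, r + delta + 1)
                new.append(cost[c][r] + min(dp[lo:hi]))
            dp = new
        return min(dp)
    ```

    In the multiple-organ version, one such surface is sought per organ, with additional inter-surface constraints preventing the prostate, bladder, and rectum boundaries from overlapping.
    
    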

    READ: Recurrent Adaptation of Large Transformers

    Fine-tuning large-scale Transformers has led to an explosion of AI applications across Natural Language Processing and Computer Vision tasks. However, fine-tuning all pre-trained model parameters becomes impractical as the model size and number of tasks increase. Parameter-efficient transfer learning (PETL) methods aim to address these challenges. While effective in reducing the number of trainable parameters, PETL methods still require significant energy and computational resources to fine-tune. In this paper, we introduce REcurrent ADaption (READ), a lightweight and memory-efficient fine-tuning method, to overcome the limitations of current PETL approaches. Specifically, READ inserts a small RNN network alongside the backbone model so that the model does not have to back-propagate through the large backbone network. Through comprehensive empirical evaluation on the GLUE benchmark, we demonstrate that READ can achieve a 56% reduction in training memory consumption and an 84% reduction in GPU energy usage while retaining high model quality compared to full fine-tuning. Additionally, the model size of READ does not grow with the backbone model size, making it a highly scalable solution for fine-tuning large Transformers.
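    The "RNN alongside the backbone" idea can be sketched as a tiny side network that scans the frozen backbone's per-layer hidden states. This is a simplified sketch, not the paper's architecture: the state shapes, the tanh recurrence, and the way the correction is applied are all assumptions. The key property shown is that only the small matrices `W_h`, `W_x`, and `b` are trainable, so backpropagation never has to traverse the large backbone.

    ```python
    import numpy as np

    def read_forward(layer_states, W_h, W_x, b):
        """Hypothetical sketch of a READ-style side network.

        `layer_states` holds one (frozen, detached) hidden-state vector per
        Transformer layer. A small RNN folds them into a single correction
        vector; only W_h, W_x, and b would receive gradients, keeping
        training memory independent of backbone depth and width (beyond
        storing the states themselves).
        """
        h = np.zeros(W_h.shape[0])
        for x in layer_states:              # one state per backbone layer
            h = np.tanh(W_h @ h + W_x @ x + b)
        return h                            # added to the backbone output
    ```

    Because the recurrence is the same size regardless of the backbone, the trainable parameter count stays fixed as the backbone scales, which matches the abstract's scalability claim.
    
    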

    MAD Max Beyond Single-Node: Enabling Large Machine Learning Model Acceleration on Distributed Systems

    Training and deploying large machine learning (ML) models is time-consuming and requires significant distributed computing infrastructure. Based on real-world large-model training on datacenter-scale infrastructure, we show that 14-32% of all GPU hours are spent on communication with no overlapping computation. To minimize this outstanding communication latency, we develop an agile performance modeling framework to guide parallelization and hardware-software co-design strategies. Using a suite of real-world large ML models on state-of-the-art GPU training hardware, we demonstrate 2.24x and 5.27x throughput improvement potential for pre-training and inference scenarios, respectively.
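    The effect of overlapping communication with computation can be captured by a toy step-time estimate. This is an assumption-laden roofline-style sketch, not the paper's performance model: it treats a fraction of communication as hidden behind compute and serializes the rest.

    ```python
    def step_time(compute_s, comm_s, overlap_fraction):
        """Toy estimate of one training-step time (seconds).

        Hypothetical model: `overlap_fraction` of the communication can run
        concurrently with compute, but hidden time is capped by the compute
        time itself; the remainder is exposed latency that serializes with
        compute, which is the cost the abstract quantifies.
        """
        hidden = min(comm_s * overlap_fraction, compute_s)
        return compute_s + comm_s - hidden
    ```

    For example, with 10 s of compute and 4 s of communication, zero overlap gives a 14 s step in which communication is 4/14 of the time (about 29%, inside the 14-32% range the abstract reports), while full overlap recovers the 10 s compute-bound step.
    
    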